Results 1 - 6 of 6
1.
PLoS Comput Biol; 19(8): e1011342, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37603559

ABSTRACT

Bayesian Active Learning (BAL) is an efficient framework for learning the parameters of a model, in which input stimuli are selected to maximize the mutual information between the observations and the unknown parameters. However, the applicability of BAL to experiments is limited as it requires performing high-dimensional integrations and optimizations in real time. Current methods are either too time-consuming or only applicable to specific models. Here, we propose an Efficient Sampling-Based Bayesian Active Learning (ESB-BAL) framework, which is efficient enough to be used in real-time biological experiments. We apply our method to the problem of estimating the parameters of a chemical synapse from the postsynaptic responses to evoked presynaptic action potentials. Using synthetic data and synaptic whole-cell patch-clamp recordings, we show that our method can improve the precision of model-based inferences, thereby paving the way towards more systematic and efficient experimental designs in physiology.
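
The stimulus-selection criterion described in the abstract can be illustrated with a small sampling-based sketch. This is not the authors' ESB-BAL code: the Gaussian observation model, the parameter names, and the sample sizes below are assumptions chosen only for illustration. The posterior over the unknown parameter is represented by samples, and each candidate stimulus is scored by a Monte Carlo estimate of the mutual information between the next observation and the parameter.

import numpy as np

# Illustrative sketch of sampling-based Bayesian active learning; the toy
# observation model and all constants are assumptions, not the paper's synapse model.
def likelihood(y, stimulus, theta, sigma=0.1):
    # Toy observation model: response is theta * stimulus plus Gaussian noise.
    return np.exp(-0.5 * ((y - theta * stimulus) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def expected_info_gain(stimulus, theta_samples, sigma=0.1, n_sim=200, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    # Simulate observations under parameters drawn from the current posterior samples.
    thetas = rng.choice(theta_samples, size=n_sim)
    ys = thetas * stimulus + sigma * rng.standard_normal(n_sim)
    # Monte Carlo mutual information: E[log p(y | theta) - log p(y)].
    lik = likelihood(ys[:, None], stimulus, theta_samples[None, :], sigma)
    marginal = lik.mean(axis=1)
    conditional = likelihood(ys, stimulus, thetas, sigma)
    return np.mean(np.log(conditional + 1e-12) - np.log(marginal + 1e-12))

def select_stimulus(candidates, theta_samples):
    # Pick the stimulus expected to be most informative about the parameter.
    gains = [expected_info_gain(s, theta_samples) for s in candidates]
    return candidates[int(np.argmax(gains))]

posterior_samples = np.random.default_rng(1).normal(1.0, 0.5, size=500)
best = select_stimulus(np.linspace(0.1, 2.0, 20), posterior_samples)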


Subjects
Problem-Based Learning; Research Design; Bayes Theorem; Action Potentials; Patch-Clamp Techniques
2.
PLoS Comput Biol; 18(2): e1009721, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35196324

ABSTRACT

Most normative models in computational neuroscience describe the task of learning as the optimisation of a cost function with respect to a set of parameters. However, learning as optimisation fails to account for a time-varying environment during the learning process, and the resulting point estimate in parameter space does not account for uncertainty. Here, we frame learning as filtering, i.e., a principled method for including time and parameter uncertainty. We derive the filtering-based learning rule for a spiking neuronal network, the Synaptic Filter, and show its computational and biological relevance. For the computational relevance, we show that filtering improves the weight estimation performance compared to a gradient learning rule with optimal learning rate. The dynamics of the mean of the Synaptic Filter are consistent with spike-timing-dependent plasticity (STDP), while the dynamics of the variance make novel predictions regarding spike-timing-dependent changes of EPSP variability. Moreover, the Synaptic Filter explains experimentally observed negative correlations between homo- and heterosynaptic plasticity.
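
For intuition, here is a minimal "learning as filtering" sketch under strong simplifying assumptions: a linear-Gaussian observation model and a diagonal Kalman filter, not the paper's spiking Synaptic Filter. Each weight carries a mean and a variance, the variance sets the effective learning rate, and a drift term lets the filter track a time-varying environment. All constants are illustrative.

import numpy as np

# Diagonal Kalman-filter sketch of learning as filtering; observation model,
# time constants, and noise levels are assumed for illustration only.
def filter_step(mu, var, x, y, tau=100.0, prior_var=1.0, obs_var=0.25, dt=1.0):
    # Prediction: weights drift toward the prior and uncertainty grows,
    # which keeps the filter responsive to a changing environment.
    mu = mu + (dt / tau) * (0.0 - mu)
    var = var + (dt / tau) * 2.0 * (prior_var - var)
    # Update: the prediction error moves the mean with an
    # uncertainty-weighted gain and shrinks the variance.
    err = y - x @ mu
    gain = var * x / (x @ (var * x) + obs_var)
    mu = mu + gain * err
    var = var * (1.0 - gain * x)
    return mu, var

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
mu, var = np.zeros(5), np.ones(5)
for _ in range(500):
    x = rng.normal(size=5)
    y = w_true @ x + 0.5 * rng.normal()
    mu, var = filter_step(mu, var, x, y)   # mu tracks w_true; var quantifies remaining uncertainty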


Assuntos
Modelos Neurológicos , Plasticidade Neuronal , Potenciais de Ação/fisiologia , Algoritmos , Aprendizagem/fisiologia , Plasticidade Neuronal/fisiologia , Neurônios/fisiologia
3.
PLoS Comput Biol; 16(4): e1007640, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32271761

ABSTRACT

This is a PLOS Computational Biology Education paper. The idea that the brain functions so as to minimize certain costs pervades theoretical neuroscience. Because a cost function by itself does not predict how the brain finds its minima, additional assumptions about the optimization method need to be made to predict the dynamics of physiological quantities. In this context, steepest descent (also called gradient descent) is often suggested as an algorithmic principle of optimization potentially implemented by the brain. In practice, researchers often consider the vector of partial derivatives as the gradient. However, the definition of the gradient and the notion of a steepest direction depend on the choice of a metric. Because the choice of the metric involves a large number of degrees of freedom, the predictive power of models that are based on gradient descent must be called into question, unless there are strong constraints on the choice of the metric. Here, we provide a didactic review of the mathematics of gradient descent, illustrate common pitfalls of using gradient descent as a principle of brain function with examples from the literature, and propose ways forward to constrain the metric.
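
The core point, that the "steepest" direction depends on the metric, can be checked in a few lines. The quadratic cost and the diagonal metric below are arbitrary illustrative choices: under a metric M the steepest-descent direction is -M^{-1} times the vector of partial derivatives, so two equally legitimate metrics generally predict different dynamics.

import numpy as np

# The vector of partial derivatives is the gradient only under the Euclidean
# metric; under a metric M the steepest-descent direction is -M^{-1} grad f.
# Cost function and metric are arbitrary illustrative choices.
def cost(w):
    return 0.5 * (3.0 * w[0] ** 2 + w[1] ** 2)

def partials(w):
    return np.array([3.0 * w[0], w[1]])

w = np.array([1.0, 1.0])
euclidean_dir = -partials(w)                      # steepest descent for M = identity
M = np.diag([10.0, 1.0])                          # e.g. parameters measured in different units
metric_dir = -np.linalg.solve(M, partials(w))     # steepest descent under M

print(euclidean_dir / np.linalg.norm(euclidean_dir))   # [-0.949, -0.316]
print(metric_dir / np.linalg.norm(metric_dir))         # [-0.287, -0.958]: a different direction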


Assuntos
Biofísica/métodos , Encéfalo/diagnóstico por imagem , Encéfalo/fisiologia , Biologia Computacional/métodos , Algoritmos , Simulação por Computador , Humanos , Processamento de Imagem Assistida por Computador , Cinética , Matemática , Redes Neurais de Computação , Neurociências/métodos
4.
Sci Rep; 7(1): 17585, 2017 Dec 11.
Article in English | MEDLINE | ID: mdl-29229925

ABSTRACT

A correction to this article has been published and is linked from the HTML version of this paper. The error has been fixed in the paper.

5.
Sci Rep; 7(1): 8722, 2017 Aug 18.
Article in English | MEDLINE | ID: mdl-28821729

ABSTRACT

The robust estimation of dynamical hidden features, such as the position of prey, based on sensory inputs is one of the hallmarks of perception. This dynamical estimation can be rigorously formulated by nonlinear Bayesian filtering theory. Recent experimental and behavioral studies have shown that animals' performance in many tasks is consistent with such a Bayesian statistical interpretation. However, it is presently unclear how a nonlinear Bayesian filter can be efficiently implemented in a network of neurons that satisfies some minimum constraints of biological plausibility. Here, we propose the Neural Particle Filter (NPF), a sampling-based nonlinear Bayesian filter, which does not rely on importance weights. We show that this filter can be interpreted as the neuronal dynamics of a recurrently connected rate-based neural network receiving feed-forward input from sensory neurons. Further, it captures properties of temporal and multi-sensory integration that are crucial for perception, and it allows for online parameter learning with a maximum likelihood approach. The NPF holds the promise of avoiding the 'curse of dimensionality', and we demonstrate numerically its capability to outperform weighted particle filters in higher dimensions and when the number of particles is limited.
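
A heavily simplified, one-dimensional sketch of the unweighted, feedback-driven filtering idea is given below. The toy hidden dynamics, the fixed gain, and all constants are assumptions; the paper derives the feedback weights and learns them online rather than fixing them by hand.

import numpy as np

# Unweighted particle filter sketch: every particle follows the prior dynamics
# and is pulled toward the observation by its own prediction error; there are
# no importance weights and no resampling. Dynamics and gain are toy choices.
def npf_step(particles, y, dt=0.01, gain=10.0, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    drift = -particles ** 3 + particles                 # toy nonlinear hidden dynamics
    feedback = gain * (y - particles)                   # per-particle prediction-error feedback
    noise = np.sqrt(dt) * rng.standard_normal(particles.shape)
    return particles + (drift + feedback) * dt + noise

rng = np.random.default_rng(1)
particles = rng.standard_normal(100)
x = 1.0                                                 # hidden state to be tracked
for _ in range(1000):
    x += (-x ** 3 + x) * 0.01 + np.sqrt(0.01) * rng.standard_normal()
    y = x + 0.3 * rng.standard_normal()                 # noisy observation
    particles = npf_step(particles, y, rng=rng)
estimate, spread = particles.mean(), particles.std()    # posterior mean and uncertainty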


Assuntos
Aprendizagem , Neurônios/fisiologia , Dinâmica não Linear , Percepção/fisiologia , Algoritmos , Teorema de Bayes , Modelos Neurológicos , Sensação
6.
PLoS One; 10(11): e0142435, 2015.
Article in English | MEDLINE | ID: mdl-26571371

ABSTRACT

Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. Those types of models have been extensively fitted to in vitro data, where the input current is controlled. Those models are, however, of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as on the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptations, as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. We finally show that this model can be used to characterize and therefore precisely compare various intracellular in vivo recordings from different animals and experimental conditions.
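
As a concrete, purely illustrative instance of this model class, the sketch below draws the subthreshold potential from a Gaussian process, here an Ornstein-Uhlenbeck process, and emits spikes with an intensity that depends nonlinearly on the potential and on the time since the last spike. All parameter values are invented, not fitted, and the exponential nonlinearity is one simple choice among many.

import numpy as np

# Toy generative version of the described model class: Gaussian-process
# (Ornstein-Uhlenbeck) subthreshold potential plus a nonlinear,
# history-dependent spike intensity. Parameter values are illustrative only.
rng = np.random.default_rng(0)
dt, T = 1e-3, 5.0                        # 1 ms steps, 5 s of simulated recording
n = int(T / dt)
tau, sigma, v_rest = 0.02, 30.0, -65.0   # OU time constant (s), noise scale (mV/sqrt(s)), resting potential (mV)
v = np.full(n, v_rest)
spikes = np.zeros(n, dtype=bool)
last_spike = -np.inf

for t in range(1, n):
    # Ornstein-Uhlenbeck (Gaussian-process) subthreshold dynamics.
    v[t] = v[t - 1] + (dt / tau) * (v_rest - v[t - 1]) + sigma * np.sqrt(dt) * rng.standard_normal()
    # Spike intensity (Hz): exponential in the potential, suppressed just after a spike.
    refractory = 1.0 - np.exp(-(t * dt - last_spike) / 0.005)
    rate = 20.0 * np.exp((v[t] + 55.0) / 4.0) * refractory
    if rng.random() < rate * dt:
        spikes[t] = True
        last_spike = t * dt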


Assuntos
Neurônios/fisiologia , Potenciais de Ação/fisiologia , Algoritmos , Animais , Simulação por Computador , Análise de Fourier , Modelos Lineares , Potenciais da Membrana , Modelos Neurológicos , Neurônios/metabolismo , Distribuição Normal , Probabilidade , Processos Estocásticos , Sinapses/fisiologia